
    Net Reclassification Indices for Evaluating Risk Prediction Instruments: A Critical Review

    Background: Net Reclassification Indices (NRI) have recently become popular statistics for measuring the prediction increment of new biomarkers. Methods: In this review, we examine the various types of NRI statistics and their correct interpretations, and we evaluate the advantages and disadvantages of the NRI approach. For pre-defined risk categories, we relate the NRI to existing measures of the prediction increment. We also consider statistical methodology for constructing confidence intervals for NRI statistics and evaluate the merits of NRI-based hypothesis testing. Conclusions: Investigators using NRI statistics should report them separately for events (cases) and nonevents (controls). When there are two risk categories, the NRI components are the same as the changes in the true- and false-positive rates; we advocate use of true- and false-positive rates and suggest it is more useful for investigators to retain these existing, descriptive terms. When there are three or more risk categories, we recommend against NRI statistics because they do not adequately account for clinically important differences in movements among risk categories. The category-free NRI is a newer descriptive device designed to avoid pre-defined risk categories, but it suffers from many of the same problems as other measures, such as the area under the receiver operating characteristic curve. In addition, the category-free NRI can mislead investigators by overstating the incremental value of a biomarker, even in independent validation data. When investigators want to test a null hypothesis of no prediction increment, the well-established tests for coefficients in the regression model are superior to the NRI. If investigators want to use NRI measures, their confidence intervals should be calculated using bootstrap methods rather than published variance formulas. The preferred single-number summary of the prediction increment is the improvement in Net Benefit.
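The two-category case described above can be made concrete in a short sketch. This is not the authors' code; the function names and toy data are illustrative. With two risk categories (high vs. low), the event NRI equals the change in the true-positive rate and the nonevent NRI equals minus the change in the false-positive rate, and the review recommends bootstrap rather than formula-based confidence intervals:

```python
# Illustrative sketch (not the authors' implementation): two-category NRI
# components and a percentile-bootstrap CI for their sum.
import numpy as np

def nri_components(y, old_high, new_high):
    """Event NRI = change in TPR; nonevent NRI = -(change in FPR).

    y: 1 for events (cases), 0 for nonevents (controls).
    old_high / new_high: high-risk classification under the old / new model.
    """
    y = np.asarray(y, dtype=bool)
    old_high = np.asarray(old_high, dtype=bool)
    new_high = np.asarray(new_high, dtype=bool)
    event_nri = new_high[y].mean() - old_high[y].mean()        # delta TPR
    nonevent_nri = old_high[~y].mean() - new_high[~y].mean()   # -(delta FPR)
    return event_nri, nonevent_nri

def bootstrap_ci(y, old_high, new_high, n_boot=2000, seed=0):
    """Percentile bootstrap CI for the overall NRI (event + nonevent).

    Assumes each resample contains both events and nonevents.
    """
    y, old_high, new_high = map(np.asarray, (y, old_high, new_high))
    rng = np.random.default_rng(seed)
    n = len(y)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        e, ne = nri_components(y[idx], old_high[idx], new_high[idx])
        stats.append(e + ne)
    return np.percentile(stats, [2.5, 97.5])
```

Reporting `event_nri` and `nonevent_nri` separately, as the review advises, is exactly reporting the change in true- and false-positive rates.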

    Ectopy on a single 12‐lead ECG, incident cardiac myopathy, and death in the community

    Background: Atrial fibrillation and heart failure are 2 of the most common diseases, yet ready means to identify individuals at risk are lacking. The 12-lead ECG is one of the most accessible tests in medicine. Our objective was to determine whether a premature atrial contraction observed on a standard 12-lead ECG would predict atrial fibrillation and mortality, and whether a premature ventricular contraction would predict heart failure and mortality. Methods and Results: We utilized the CHS (Cardiovascular Health) Study, which followed 5577 participants for a median of 12 years, as the primary cohort. The ARIC (Atherosclerosis Risk in Communities) Study, the replication cohort, captured data from 15 792 participants over a median of 22 years. In the CHS, multivariable analyses revealed that a baseline 12-lead ECG premature atrial contraction predicted a 60% increased risk of atrial fibrillation (hazard ratio, 1.6; 95% CI, 1.3-2.0; P<0.001) and a premature ventricular contraction predicted a 30% increased risk of heart failure (hazard ratio, 1.3; 95% CI, 1.0-1.6; P=0.021). In the negative control analyses, neither predicted incident myocardial infarction. A premature atrial contraction was associated with a 30% increased risk of death (hazard ratio, 1.3; 95% CI, 1.1-1.5; P=0.008), and a premature ventricular contraction was associated with a 20% increased risk of death (hazard ratio, 1.2; 95% CI, 1.0-1.3; P=0.044). Similarly statistically significant results for each analysis were also observed in ARIC. Conclusions: Based on a single standard ECG, a premature atrial contraction predicted incident atrial fibrillation and death, and a premature ventricular contraction predicted incident heart failure and death, suggesting that this commonly used test may predict future disease.

    Contribution of Major Lifestyle Risk Factors for Incident Heart Failure in Older Adults: The Cardiovascular Health Study.

    OBJECTIVES: The goal of this study was to determine the relative contribution of major lifestyle factors to the development of heart failure (HF) in older adults. BACKGROUND: HF incurs high morbidity, mortality, and health care costs among adults ≥65 years of age, the most rapidly growing segment of the U.S. population. METHODS: We prospectively investigated separate and combined associations of lifestyle risk factors with incident HF (1,380 cases) over 21.5 years among 4,490 men and women in the Cardiovascular Health Study, a community-based cohort of older adults. Lifestyle factors included 4 dietary patterns (Alternative Healthy Eating Index, Dietary Approaches to Stop Hypertension, an American Heart Association 2020 dietary goals score, and a Biologic pattern constructed using previous knowledge of cardiovascular disease dietary risk factors), 4 physical activity metrics (exercise intensity, walking pace, energy expended in leisure activity, and walking distance), alcohol intake, smoking, and obesity. RESULTS: No dietary pattern was associated with developing HF (p > 0.05). Walking pace and leisure activity were associated with a 26% and 22% lower risk of HF, respectively (pace >3 mph vs. <2 mph; hazard ratio [HR]: 0.74; 95% confidence interval [CI]: 0.63 to 0.86; leisure activity ≥845 kcal/week vs. <845 kcal/week; HR: 0.78; 95% CI: 0.69 to 0.87). Modest alcohol intake, maintaining a body mass index <30 kg/m2, and not smoking were also independently associated with a lower risk of HF. Participants with ≥4 healthy lifestyle factors had a 45% lower risk of HF (HR: 0.55; 95% CI: 0.42 to 0.74). Heterogeneity by age, sex, cardiovascular disease, hypertension medication use, and diabetes was not observed. CONCLUSIONS: Among older U.S. adults, physical activity, modest alcohol intake, avoiding obesity, and not smoking, but not dietary patterns, were associated with a lower risk of HF.

Role of the funding source: This research was supported by contracts HHSN268201200036C, HHSN268200800007C, N01HC55222, N01HC85079, N01HC85080, N01HC85081, N01HC85082, N01HC85083, N01HC85086, and grant HL080295 from the National Heart, Lung, and Blood Institute (NHLBI), with additional contribution from the National Institute of Neurological Disorders and Stroke (NINDS). Additional support was provided by AG023629 from the National Institute on Aging (NIA). A full list of principal CHS investigators and institutions can be found at CHS-NHLBI.org. Fumiaki Imamura was supported by Medical Research Council Unit Programme number MC_UU_125015/5. This is the final version of the article. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.jchf.2015.02.00

    Net Reclassification Index: a Misleading Measure of Prediction Improvement

    The evaluation of biomarkers to improve risk prediction is a common theme in modern research. Since its introduction in 2008, the net reclassification index (NRI) (Pencina et al. 2008; Pencina et al. 2011) has gained widespread use as a measure of prediction performance, with over 1,200 citations as of June 30, 2013. The NRI is considered by some to be more sensitive to clinically important changes in risk than the traditional change in the AUC (Delta AUC) statistic (Hlatky et al. 2009). Recent statistical research, however, has raised questions about the validity of conclusions based on the NRI (Hilden and Gerds 2013; Pepe et al. 2013). Here we illustrate one serious problem: unlike classic measures of prediction performance, the NRI can provide a biased assessment of prediction performance even with independent validation data.

    Association of alcohol consumption after development of heart failure with survival among older adults in the Cardiovascular Health Study

    Importance: More than 1 million older adults develop heart failure annually. The association of alcohol consumption with survival among these individuals after diagnosis is unknown. Objective: To determine whether alcohol use is associated with increased survival among older adults with incident heart failure. Design, Setting, and Participants: This prospective cohort study included 5888 community-dwelling adults aged 65 years or older who were recruited to participate in the Cardiovascular Health Study between June 12, 1989, and June 1993, from 4 US sites. Of the total participants, 393 individuals had a new diagnosis of heart failure within the first 9 years of follow-up through June 2013. The study analysis was performed between January 19, 2016, and September 22, 2016. Exposures: Alcohol consumption was divided into 4 categories: abstainers (never drinkers), former drinkers, 7 or fewer alcoholic drinks per week, and more than 7 drinks per week. Primary Outcomes and Measures: Participant survival after the diagnosis of incident heart failure. Results: Among the 393 adults diagnosed with incident heart failure, 213 (54.2%) were female, 339 (86.3%) were white, and the mean (SD) age was 78.7 (6.0) years. Alcohol consumption after diagnosis was reported in 129 (32.8%) of the participants. Across the alcohol consumption categories of long-term abstainers, former drinkers, consumers of 1-7 drinks weekly, and consumers of more than 7 drinks weekly, the percentage of men (32.1%, 49.0%, 58.0%, and 82.4%, respectively; P < .001 for trend), white individuals (78.0%, 92.7%, 92.0%, and 94.1%, respectively; P < .001 for trend), and high-income participants (22.0%, 43.8%, 47.3%, and 64.7%, respectively; P < .001 for trend) increased with increasing alcohol consumption.
Across the 4 categories, participants who consumed more alcohol had more years of education (mean, 12 years [interquartile range (IQR), 8.0-10.0 years], 12 years [IQR, 11.0-14.0 years], 13 years [IQR, 12.0-15.0 years], and 13 years [IQR, 12.0-14.0 years]; P < .001 for trend). Diabetes was less common across the alcohol consumption categories (32.1%, 26.0%, 22.3%, and 5.9%, respectively; P = .01 for trend). Across alcohol consumption categories, there were fewer never smokers (58.3%, 44.8%, 35.7%, and 29.4%, respectively; P < .001 for trend) and more former smokers (34.5%, 38.5%, 50.0%, and 52.9%, respectively; P = .006 for trend). After controlling for other factors, consumption of 7 or fewer alcoholic drinks per week was associated with additional mean survival of 383 days (95% CI, 17-748 days; P = .04) compared with abstention from alcohol. Although the robustness was limited by the small number of individuals who consumed more than 7 drinks per week, a significant inverted U-shaped association between alcohol consumption and survival was observed. Multivariable model estimates of mean time from heart failure diagnosis to death were 2640 days (95% CI, 1967-3313 days) for never drinkers, 3046 days (95% CI, 2372-3719 days) for consumers of 0 to 7 drinks per week, and 2806 days (95% CI, 1879-3734 days) for consumers of more than 7 drinks per week (P = .02). Consumption of 10 drinks per week was associated with the longest survival, a mean of 3381 days (95% CI, 2806-3956 days) after heart failure diagnosis. Conclusions and Relevance: These findings suggest that limited alcohol consumption among older adults with incident heart failure is associated with a survival benefit compared with long-term abstinence, and that older adults who develop heart failure may not need to abstain from moderate levels of alcohol consumption.

    Using built environment characteristics to predict walking for exercise

    Background: Environments conducive to walking may help people avoid sedentary lifestyles and associated diseases. Recent studies developed walkability models combining several built environment characteristics to optimally predict walking. Developing and testing such models with the same data could lead to overestimating one's ability to predict walking in an independent sample of the population. More accurate estimates of model fit can be obtained by splitting a single study population into training and validation sets (holdout approach) or through developing and evaluating models in different populations. We used these two approaches to test whether built environment characteristics near the home predict walking for exercise. Study participants lived in western Washington State and were adult members of a health maintenance organization. The physical activity data used in this study were collected by telephone interview and were selected for their relevance to cardiovascular disease. In order to limit confounding by prior health conditions, the sample was restricted to participants in good self-reported health and without a documented history of cardiovascular disease. Results: For 1,608 participants meeting the inclusion criteria, the mean age was 64 years, 90 percent were white, 37 percent had a college degree, and 62 percent of participants reported that they walked for exercise. Single built environment characteristics, such as residential density or connectivity, did not significantly predict walking for exercise. Regression models using multiple built environment characteristics to predict walking were not successful at predicting walking for exercise in an independent population sample. In the validation set, none of the logistic models had a C-statistic confidence interval excluding the null value of 0.5, and none of the linear models explained more than one percent of the variance in time spent walking for exercise. 
We did not detect significant differences in walking for exercise among census areas or postal codes, which were used as proxies for neighborhoods. Conclusion: None of the built environment characteristics significantly predicted walking for exercise, nor did combinations of these characteristics predict walking for exercise when tested using a holdout approach. These results reflect a lack of neighborhood-level variation in walking for exercise for the population studied.

This work was supported by a University of Washington Royalty Research Fund award; by contracts R01-HL043201, R01-HL068639, and T32-HL07902 from the National Heart, Lung, and Blood Institute; and by grant R01-AG09556 from the National Institute on Aging.
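The holdout approach described in this abstract can be sketched briefly. The data below are synthetic stand-ins (not the study's variables), so the validation C-statistic simply illustrates the comparison against the null value of 0.5; the scikit-learn calls are standard library API:

```python
# Hypothetical sketch of the holdout approach: fit on a training split,
# judge discrimination (C-statistic) only on the held-out validation split.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1608                                # sample size from the abstract
X = rng.normal(size=(n, 3))             # stand-ins for built environment measures
y = rng.integers(0, 2, size=n)          # synthetic walking-for-exercise labels

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
c_stat = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
# Because these labels are unrelated to the features, the validation
# C-statistic should sit near the null value of 0.5.
```

Evaluating only on the held-out set avoids the optimism that comes from developing and testing a walkability model on the same data.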

    The Value of Rare Genetic Variation in the Prediction of Common Obesity in European Ancestry Populations

    Polygenic risk scores (PRSs) aggregate the effects of genetic variants across the genome and are used to predict risk of complex diseases, such as obesity. Current PRSs include only common variants (minor allele frequency (MAF) ≥1%), whereas the contribution of rare variants in PRSs to predicting disease remains unknown. Here, we examine whether augmenting the standard common variant PRS (PRScommon) with a rare variant PRS (PRSrare) improves prediction of obesity. We used genome-wide genotyped and imputed data on 451,145 European-ancestry participants of the UK Biobank, as well as whole exome sequencing (WES) data on 184,385 participants. We performed single variant analyses (for both common and rare variants) and gene-based analyses (for rare variants) for association with BMI (kg/m2), obesity (BMI ≥ 30 kg/m2), and extreme obesity (BMI ≥ 40 kg/m2). We built PRScommon and PRSrare using a range of methods (Clumping+Thresholding [C+T], PRS-CS, lassosum, gene-burden test). We selected the best-performing PRSs and assessed their performance in 36,757 European-ancestry unrelated participants with whole genome sequencing (WGS) data from the Trans-Omics for Precision Medicine (TOPMed) program. The best-performing PRScommon explained 10.1% of the variation in BMI, and 18.3% and 22.5% of the susceptibility to obesity and extreme obesity, respectively, whereas the best-performing PRSrare explained 1.49%, 2.97%, and 3.68%, respectively. The PRSrare was associated with an increased risk of obesity and extreme obesity (OR for obesity = 1.37 per SD of the PRS, P = 1.7 × 10⁻⁸⁵; OR for extreme obesity = 1.55 per SD, P = 3.8 × 10⁻⁴⁰), which was attenuated after adjusting for PRScommon (OR for obesity = 1.08 per SD, P = 9.8 × 10⁻⁶; OR for extreme obesity = 1.09 per SD, P = 0.02).
When PRSrare and PRScommon were combined, the increase in explained variance attributable to PRSrare was small (incremental Nagelkerke R² = 0.24% for obesity and 0.51% for extreme obesity). Consistently, adding PRSrare to PRScommon provided little improvement in the prediction of obesity (PRSrare AUC = 0.591; PRScommon AUC = 0.708; PRScombined AUC = 0.710). In summary, while rare variants show convincing association with BMI, obesity, and extreme obesity, the PRSrare provides limited improvement over PRScommon in the prediction of obesity risk in these large populations.
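The incremental Nagelkerke R² comparison reported above can be sketched as follows. This uses simulated PRS values with assumed effect sizes, not the study's data; a base model (PRScommon only) and a full model (PRScommon + PRSrare) are fitted, and the difference in Nagelkerke pseudo-R² is the increment:

```python
# Hypothetical sketch: incremental Nagelkerke R^2 of a rare-variant PRS over a
# common-variant PRS, on simulated data with assumed effect sizes.
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Maximum-likelihood logistic regression via Newton-Raphson.

    Returns the fitted event probabilities (intercept added internally)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X1 @ beta))
        w = p * (1 - p)                                 # IRLS weights
        beta += np.linalg.solve((X1 * w[:, None]).T @ X1, X1.T @ (y - p))
    return 1 / (1 + np.exp(-X1 @ beta))

def nagelkerke_r2(y, p_hat):
    """Nagelkerke pseudo-R^2 from fitted event probabilities."""
    n, p0 = len(y), y.mean()
    ll_null = np.sum(y * np.log(p0) + (1 - y) * np.log(1 - p0))
    ll_model = np.sum(y * np.log(p_hat) + (1 - y) * np.log(1 - p_hat))
    cox_snell = 1 - np.exp(2 * (ll_null - ll_model) / n)
    return cox_snell / (1 - np.exp(2 * ll_null / n))    # rescale to max 1

rng = np.random.default_rng(1)
n = 5000
prs_common = rng.normal(size=n)                    # simulated standardized PRSs
prs_rare = rng.normal(size=n)
logit = -1.0 + 0.8 * prs_common + 0.2 * prs_rare   # assumed effect sizes
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

r2_base = nagelkerke_r2(y, fit_logistic(prs_common[:, None], y))
r2_full = nagelkerke_r2(y, fit_logistic(np.column_stack([prs_common, prs_rare]), y))
incremental = r2_full - r2_base                    # small, as in the abstract
```

With a weak rare-variant effect relative to the common-variant score, the increment stays small even though the rare-variant PRS is genuinely associated with the outcome, mirroring the pattern reported above.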